Two weeks ago there was a SONORUS meeting and school in Naples and Sorrento, Italy. The topic of the school was 'Computational Soundscape Analysis'. The next meeting will be in October in Gothenburg, Sweden.
Paper at Euronoise
The timetable for Euronoise 2015 is available. I will present the paper Determining an Empirical Emission Model for the Auralization of Jet Aircraft on Tuesday the second at nine in the morning.
Update
It seems my presentation has been moved to 10:20.
Slides on Zenodo
As posted earlier, I've been uploading to Zenodo almost all of the slides, posters and proceedings that I made during my project. This includes the slides (as well as two auralizations) of the presentation I gave at the 168th ASA meeting in Indianapolis last year.
Auralization: Doppler effect
The goal of the project that I'm working on is to develop a tool to synthesize how it sounds when an aircraft flies over an urban environment. One part of the project deals with the propagation model, which also includes the Doppler shift. Let's have a look now at how we can include the Doppler shift in auralizations.
Doppler shift equation
The following equation is likely known to you. $$ f = \frac{c + v_r}{c + v_s} f_0 $$ You put in the emission frequency $f_0$, the speed of sound $c$, and the velocities of the receiver and the source, $v_r$ and $v_s$, where $v_r$ is positive when the receiver moves towards the source and $v_s$ is positive when the source moves away from the receiver. What you get out is the Doppler-shifted frequency.
In the time domain the Doppler shift is implicitly included when considering the propagation delay between source and receiver. It takes $d = s/c$ seconds for sound to travel the source-receiver distance $s$ at the speed of sound $c$. When the distance changes over time, the emission is Doppler shifted.
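As a quick sanity check we can evaluate the equation above directly. The numbers below are made-up values, although the 70 m/s returns later in this post as the source speed:

```python
def doppler_shift(f0, c=343.0, v_receiver=0.0, v_source=0.0):
    """Doppler-shifted frequency f = (c + v_r) / (c + v_s) * f0.

    Sign convention: `v_receiver` is positive when the receiver moves
    towards the source, `v_source` is positive when the source moves
    away from the receiver.
    """
    return (c + v_receiver) / (c + v_source) * f0

f0 = 1000.0  # emission frequency in Hz (a made-up value)
print(doppler_shift(f0, v_source=-70.0))  # source approaching: ~1256 Hz
print(doppler_shift(f0, v_source=+70.0))  # source receding: ~831 Hz
```

Note that this only gives the instantaneous frequency shift; the time-domain approach below handles the full signal.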
Applying the Doppler shift to a tone
As an example we will now apply the Doppler shift to a pure tone. Let's start off with some imports.
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from acoustics import Signal
from IPython.display import Audio
import warnings
warnings.filterwarnings('ignore')
As you can see I'm importing the Signal class from the acoustics package. This package is mainly a collection of tools that can be useful to acousticians. I started the package because I need many of these tools for my research, and I'm sure others can use them as well. The code base is still a bit messy though, and things do break occasionally.
We start with generating a pure tone.
duration = 10.0
fs = 44100.0
frequency = 5000.0
samples = int(fs*duration)
times = np.arange(samples) / fs
signal = Signal(np.sin(2.0*np.pi*frequency*times), fs=fs)
As I wrote before, the Signal class can be found in the acoustics package. This class is a subclass of np.ndarray and requires, besides the array, a sample frequency. The class has methods for common operations, like calculating different types of spectra and plotting them, as well as calculating other acoustic quantities like the equivalent sound pressure level.
Audio(signal, rate=signal.fs)
fig = signal.spectrogram(ylim=(0.0, 10000.0))
In order to create a Doppler shift we need a moving source or receiver. In this case, let's consider a moving source and a non-moving receiver. The source will move straight over the receiver.
velocity = 70.0
x = times * velocity
x -= x.max()/2.0
y = np.zeros(samples)
z = 100.0 * np.ones(samples)
position_source = np.vstack((x,y,z)).T
position_receiver = np.zeros(3)
distance = np.linalg.norm((position_source - position_receiver), axis=-1)
The exact speed of sound in the atmosphere depends on the state of the atmosphere, like the temperature, pressure and humidity. Furthermore, the speed of sound can vary spatially and temporally due to e.g. wind, temperature gradients or turbulence. Let's assume for now a constant speed of sound of 343.0 m/s.
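As a side note, for dry air the temperature dependence can be estimated from the ideal gas relation $c = \sqrt{\gamma R T}$. The sketch below uses textbook values for $\gamma$ and $R$ and is not part of the auralization itself:

```python
import math

def speed_of_sound(temperature):
    """Speed of sound in dry air in m/s, with `temperature` in kelvin.

    Uses the ideal gas relation c = sqrt(gamma * R * T).
    """
    gamma = 1.4   # ratio of specific heats of air
    R = 287.05    # specific gas constant of dry air in J/(kg K)
    return math.sqrt(gamma * R * temperature)

print(round(speed_of_sound(293.15), 1))  # at 20 degrees Celsius: 343.2 m/s
```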
soundspeed = 343.0
The propagation delay from source to receiver is given simply by
delay = distance / soundspeed
Let's have a look at the delay as function of time.
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(times, delay)
ax.set_xlabel('$t$ in s')
ax.set_ylabel('delay in s')
Now that we've calculated the delay we need to apply it to our signal of interest, which is a matter of shifting the samples. However, since we have a discrete signal, and the delays are not integer multiples of the sample period, an interpolation scheme is needed.
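To see that the delays are indeed fractional when expressed in samples, here is a small self-contained illustration. The distances are made up, but the sample frequency and speed of sound match the ones used above:

```python
import numpy as np

fs = 44100.0        # sample frequency in Hz
soundspeed = 343.0  # speed of sound in m/s

# Hypothetical source-receiver distances in metres.
distance = np.array([100.0, 100.3, 100.9])

delay_in_samples = distance / soundspeed * fs
print(delay_in_samples)  # fractional, not integer multiples of the sample period
```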
Linear interpolation
The easiest and likely fastest method is a linear interpolation.
def interpolation_linear(signal, times, fs):
    """Linear interpolation of `signal` at `times`.

    :param signal: Signal.
    :param times: Times to sample at.
    :param fs: Sample frequency.

    """
    k_r = np.arange(0, len(signal), 1)  # Create vector of indices
    k = k_r - times * fs                # Create vector of warped indices
    kf = np.floor(k).astype(int)        # Floor the warped indices. Convert to integers so we can use them as indices.
    # Interpolate linearly between the two neighbouring samples. Samples
    # warped to before the start of the signal are set to zero.
    R = ((1.0 - k + kf) * signal[kf] + (k - kf) * signal[kf + 1]) * (kf >= 0)
    return R
Let's apply the delay and listen.
delayed_linear = Signal( interpolation_linear(signal, delay, signal.fs), signal.fs)
Audio(delayed_linear, rate=delayed_linear.fs)
You can hear the Doppler shift clearly, but... there are artifacts! The artifacts are also clearly visible in the spectrogram shown below. The red line shows the Doppler-shifted frequency, and the other visible lines are the artifacts. The artifacts are at least 30 dB weaker than the tone of interest, but we can still clearly hear them since there is no signal besides the tone.
fig = delayed_linear.spectrogram(ylim=(0.0, 10000.0))
In most auralizations I create these artifacts are masked. There are, however, ways to reduce them. One way would be to upsample the signal before applying this resampling; another would be to use a different interpolation scheme.
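A rough sketch of the upsampling approach, assuming `scipy` is available; the upsampling factor of 4 is arbitrary, and `interpolation_linear_upsampled` is just a name I made up:

```python
import numpy as np
from scipy.signal import resample

def interpolation_linear_upsampled(signal, times, fs, factor=4):
    """Linear interpolation of `signal` at `times`, after upsampling.

    Upsampling by `factor` before the linear interpolation reduces the
    interpolation error, at the cost of extra computation and memory.
    """
    upsampled = resample(signal, len(signal) * factor)
    k = np.arange(len(signal)) - times * fs  # warped indices at the original rate
    k_up = k * factor                        # warped indices at the higher rate
    kf = np.floor(k_up).astype(int)
    frac = k_up - kf
    valid = (kf >= 0) & (kf + 1 < len(upsampled))
    kf = np.clip(kf, 0, len(upsampled) - 2)  # keep indexing in bounds
    return ((1.0 - frac) * upsampled[kf] + frac * upsampled[kf + 1]) * valid
```

With zero delay this reduces to sampling the upsampled signal at the original sample instants, so it leaves the signal (nearly) unchanged; with a time-varying delay the finer grid makes the linear interpolation error smaller.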
Lanczos interpolation
Theoretically, the optimal reconstruction filter is a sinc filter. In practice only approximations of this filter can be used, and these approximations are generally obtained by windowing and truncating the sinc function. One such approximation is the Lanczos filter or kernel, which is the sinc function windowed by another sinc function. For more information on Lanczos interpolation I can recommend the excellent Wikipedia article on the topic.
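To be explicit about what we are approximating: the Lanczos kernel of size $a$ is $$ L(x) = \begin{cases} \operatorname{sinc}(x) \operatorname{sinc}(x/a) & \text{if } -a < x < a, \\ 0 & \text{otherwise,} \end{cases} $$ with the normalized sinc function $\operatorname{sinc}(x) = \sin(\pi x) / (\pi x)$. The interpolated signal at position $x$ is then $s(x) = \sum_{i = \lfloor x \rfloor - a + 1}^{\lfloor x \rfloor + a} s_i \, L(x - i)$, a sum over the $2a$ samples nearest to $x$.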
The following code shows an implementation of Lanczos interpolation in one dimension. In pure Python this would be too slow, so I'm using the Numba just-in-time compiler.
import numba
import math
@numba.jit(nogil=True)
def sinc(x):
    """Normalized sinc function."""
    if x == 0:
        return 1.0
    else:
        return math.sin(x * math.pi) / (x * math.pi)
@numba.jit(nogil=True)
def _lanczos_window(x, a):
    """Lanczos kernel: the sinc function windowed by another, wider sinc."""
    if -a < x and x < a:
        return sinc(x) * sinc(x / a)
    else:
        return 0.0
@numba.jit(nogil=True)
def _lanczos_resample(signal, samples, output, a):
    """Sample `signal` at the fractional indices `samples`.
    """
    for index, x in enumerate(samples):
        if x >= 0.0 and x < len(signal):
            # The kernel is non-zero for the 2a samples around x.
            for i in range(math.floor(x) - a + 1, math.floor(x) + a + 1):
                if i >= 0 and i < len(signal):
                    output[index] += signal[i] * _lanczos_window(x - i, a)
    return output
def interpolation_lanczos(signal, times, fs, a=10):
    """Lanczos interpolation of `signal` at `times`.

    :param signal: Signal.
    :param times: Times to sample at.
    :param fs: Sample frequency.
    :param a: Size of the Lanczos kernel :math:`a`.

    """
    samples = -times * fs + np.arange(len(signal))
    return _lanczos_resample(signal, samples, np.zeros_like(signal), a)
Let's apply the delay again, but now using this filter. Note that I use the default kernel size $a = 10$, which is quite arbitrary.
delayed_lanczos = Signal(interpolation_lanczos(signal, delay, signal.fs), signal.fs)
Audio(delayed_lanczos, rate=delayed_lanczos.fs)
The artifacts that were present with linear interpolation are now mostly gone. However, when the change in delay is large, other artifacts become audible.
fig = delayed_lanczos.spectrogram(ylim=(0.0, 10000.0))
Conclusions
The Doppler effect is an interesting physical phenomenon, and one that many of us hear daily. I've shown how you can synthesize the Doppler shift and considered two interpolation schemes. Which method is preferable depends on your use case. For higher frequencies, Lanczos interpolation results in fewer artifacts than linear interpolation. However, Lanczos interpolation is computationally much more demanding:
%timeit interpolation_linear(signal, delay, signal.fs)
%timeit interpolation_lanczos(signal, delay, signal.fs)
Research visit to Chalmers
After the ASA meeting in Indianapolis I went to Gothenburg again for a research visit. As I wrote in my previous post, it was strange to be there again. Last spring I visited Gothenburg for just a couple of days, but staying for seven weeks is quite different. You start noticing that quite a lot has changed: many people I knew there have moved away, and a couple of restaurants I used to visit have closed since. Despite that, it was nice to meet old friends again.
So, as I wrote in the title, I went there on a research visit. The main reason for my visit was to attend some courses at Chalmers, but also to work on my research. Initially the idea was to further investigate the model we're developing to deal with atmospheric turbulence in aircraft auralisations. In the end, however, the courses took more time than I expected, so I didn't really get to work on the turbulence model. I did manage to implement one nice part of the auralisation tool, namely support for Ambisonics.
Besides Chalmers I also visited SP in Borås. SP was a familiar place for me as well, since I did an internship there in 2010. Earlier I had made recordings with a SoundField microphone; at Empa, however, we do not have an Ambisonics setup. SP has a listening room with a third-order Ambisonics system, giving me the possibility to listen to the first-order recordings as well as to higher-order auralisations.
During one visit I met up with Peter Lundén, who installed the system. After some problem solving we managed to get both the recordings and the auralisations working. Unfortunately there was one problem with the auralisations: somehow, halfway through, the aircraft makes a U-turn. While I haven't investigated it any further, I think it is due to the inclusion of the Condon-Shortley phase in the function I used to calculate the spherical harmonics.
168th ASA meeting
Several weeks ago I was in Indianapolis for the 168th ASA meeting. The main reason for me to go to this conference was to meet fellow VASTCON members.
The conference was quite a bit different from the one I attended in Krakow. There were spaces available where one could work, including internet connectivity, and the chairs were actually comfortable. Basically every evening there were activities, ranging from a museum visit to a Halloween party. While the conference was small, there were some interesting special sessions, like the session on hot topics in acoustics and the session on acoustics and education. Of the technical sessions I especially liked the one on metamaterials and sonic crystals.
On Thursday morning there was a VASTCON meeting, and in the afternoon the VASTCON-organized session on virtual acoustics took place. During this session I gave a presentation on how to include the effects of turbulence on sound propagation for the auralization of aircraft noise.
On Friday some other VASTCON members and I visited the Herrick Labs at Purdue University. They had a very interesting device for measuring the noise from a rolling tyre: basically a circular road with two wheels on rotating arms moving rapidly over it.
After a (very) short stay in the Netherlands I'm in Gothenburg again, at Chalmers. Strange to be here again!
Posters and slides about my project
Publishing data: ZENODO and figshare
Searching for a way to publish scientific data, I found two interesting platforms: figshare and ZENODO. I made a quick comparison between the two, and for me the clear winner is ZENODO.